Revealing the Vicious Circle of Disengaged User Acceptance: A SaaS Provider's Perspective
User acceptance tests (UAT) are an integral part of many different software engineering methodologies. In this paper, we examine the influence of UATs on the relationship between users and Software-as-a-Service (SaaS) applications, which are continuously delivered rather than rolled out during a one-off signoff process. Based on an exploratory qualitative field study at a multinational SaaS provider in Denmark, we show that UATs often address the wrong problem, in that positive user acceptance may actually indicate a negative user experience. Hence, SaaS providers should be careful not to rest on what we term disengaged user acceptance. Instead, we outline an approach that purposefully queries users for ambivalent emotions that evoke constructive criticism, in order to facilitate a discourse that favors the continuous innovation of a SaaS system. We discuss theoretical and practical implications of our approach for the study of user engagement in testing SaaS applications.
Understanding Hackers' Work: An Empirical Study of Offensive Security Practitioners
Offensive security tests are a common way to proactively discover potential
vulnerabilities. They are performed by specialists, often called
penetration testers or white-hat hackers. The chronic lack of available
white-hat hackers prevents sufficient security test coverage of software.
Research into automation tries to alleviate this problem by improving the
efficiency of security testing. To achieve this, researchers and tool builders
need a solid understanding of how hackers work, their assumptions, and pain
points.
In this paper, we present a first data-driven exploratory qualitative study
of twelve security professionals, their work, and the problems occurring
therein. We perform a thematic analysis to gain insights into the execution
of security assignments, hackers' thought processes, and the challenges they
encounter.
This analysis allows us to conclude with recommendations for researchers and
tool builders to increase the efficiency of their automation and to identify
novel areas for research.
Comment: 11 pages; we have chosen the category "Software Engineering" rather
than "Cryptography and Security" because, while this is a paper about security
practices, we target software engineering researchers.
Getting pwn'd by AI: Penetration Testing with Large Language Models
Software security testing, and more specifically penetration testing, is an
activity that requires high levels of expertise and involves many manual
testing and analysis steps. This paper explores the potential of large
language models (LLMs), such as GPT-3.5, to augment penetration testers with
AI sparring partners. We explore the feasibility of supplementing penetration
testers with AI models for two distinct use cases: high-level task planning
for security testing assignments and low-level vulnerability hunting within a
vulnerable virtual machine. For the latter, we implemented a closed feedback
loop between LLM-generated low-level actions and a vulnerable virtual machine
(connected through SSH), allowing the LLM to analyze the machine state for
vulnerabilities and suggest concrete attack vectors, which were automatically
executed within the virtual machine. We discuss promising initial results,
detail avenues for improvement, and close by deliberating on the ethics of
providing AI-based sparring partners.
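The closed feedback loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_llm` and `run_on_vm` are hypothetical stand-ins for a real LLM call (e.g. GPT-3.5) and an SSH connection to the vulnerable virtual machine, stubbed here with canned responses so the loop structure is visible.

```python
# Minimal sketch of a closed feedback loop between an LLM and a target VM.
# query_llm and run_on_vm are hypothetical stubs; the real setup would call
# an LLM API and execute commands over SSH on a vulnerable virtual machine.

def query_llm(state: str) -> str:
    """Stub: ask the model for the next shell command given the VM state."""
    playbook = {
        "": "id",
        "uid=1001(lowpriv)": "find / -perm -4000 -type f 2>/dev/null",
    }
    return playbook.get(state, "exit")

def run_on_vm(command: str) -> str:
    """Stub: execute the command on the VM (over SSH in a real setup)."""
    outputs = {
        "id": "uid=1001(lowpriv)",
        "find / -perm -4000 -type f 2>/dev/null": "/usr/bin/sudo\n/usr/bin/find",
    }
    return outputs.get(command, "")

def feedback_loop(max_rounds: int = 5) -> list[tuple[str, str]]:
    """Alternate between LLM-suggested actions and observed machine state."""
    state = ""
    transcript = []
    for _ in range(max_rounds):
        command = query_llm(state)
        if command == "exit":
            break  # the model has no further suggestions
        state = run_on_vm(command)
        transcript.append((command, state))
    return transcript

for command, output in feedback_loop():
    print(f"$ {command}\n{output}")
```

The key design point is that each command's output becomes the next prompt's state, closing the loop between model and machine.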
Software runtime analytics for developers: extending developers' mental models by runtime dimensions
An Exploratory Study of Ad Hoc Parsers in Python
Background: Ad hoc parsers are pieces of code that use common string
functions like split, trim, or slice to effectively perform parsing. Whether it
is handling command-line arguments, reading configuration files, parsing custom
file formats, or any number of other minor string processing tasks, ad hoc
parsing is ubiquitous -- yet poorly understood.
Objective: This study aims to reveal the common syntactic and semantic
characteristics of ad hoc parsing code in real world Python projects. Our goal
is to understand the nature of ad hoc parsers in order to inform future program
analysis efforts in this area.
Method: We plan to conduct an exploratory study based on large-scale mining
of open-source Python repositories from GitHub. We will use program slicing to
identify program fragments related to ad hoc parsing and analyze these parsers
and their surrounding contexts across 9 research questions using 25 initial
syntactic and semantic metrics. Beyond descriptive statistics, we will attempt
to identify common parsing patterns by cluster analysis.
Comment: 5 pages, accepted as a registered report for MSR 2023 with Continuity
Acceptance (CA).
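For concreteness, the kind of code the study targets looks like the following hypothetical example: ordinary string functions standing in for a real parser. The key=value format and the `parse_config` name are illustrative assumptions, not taken from the study.

```python
# Hypothetical example of an "ad hoc parser": plain string functions
# (splitlines, strip, partition) used to parse a simple key=value config
# format instead of a dedicated parsing library.

def parse_config(text: str) -> dict[str, str]:
    """Parse lines like 'key = value', skipping blanks and '#' comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """
# database settings
host = localhost
port = 5432
"""
print(parse_config(sample))  # {'host': 'localhost', 'port': '5432'}
```

Such fragments are easy to write but hard to analyze, which is what motivates studying them at scale.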
Evaluating LLMs for Privilege-Escalation Scenarios
Penetration testing, an essential component of cybersecurity, allows
organizations to proactively identify and remediate vulnerabilities in their
systems, thus bolstering their defense mechanisms against potential
cyberattacks. One recent advancement in the realm of penetration testing is the
utilization of Large Language Models (LLMs). We explore the intersection of
LLMs and penetration testing to gain insight into their capabilities and
challenges in the context of privilege escalation. We create an automated
Linux privilege-escalation benchmark utilizing local virtual machines. We
introduce an LLM-guided privilege-escalation tool designed for evaluating
different LLMs and prompt strategies against our benchmark. We analyze the
impact of different prompt designs, the benefits of in-context learning, and
the advantages of offering high-level guidance to LLMs. We discuss challenging
areas for LLMs, including maintaining focus during testing and coping with
errors, and finally compare them with both stochastic parrots and human
hackers.
All the Services Large and Micro: Revisiting Industrial Practice
Services computing is both an academic field of study looking back at close to 15 years of fundamental research and a vibrant area of industrial software engineering. Industrial practice in this area is notorious for its ever-changing nature, with the state of the art shifting almost on a yearly basis based on the ebb and flow of various hypes and trends. In this paper, we provide a look "across the wall" into industrial services computing. We conducted an empirical study based on the service ecosystem of 42 companies, and report, among other aspects, how service-to-service communication is implemented, how service discovery works in practice, what Quality-of-Service metrics practitioners are most interested in, and how services are deployed and hosted. We argue that not all assumptions that are typical in academic papers in the field are justified by industrial practice, and we conclude the paper with recommendations for future research that is better aligned with the services industry.